fix(cli): sync all MCP prompt messages to session before agent reply (#6566)
Merged
angiejones merged 2 commits into main · Jan 19, 2026
Conversation
Fixes #6506

When using `/prompt` in the CLI with multi-message MCP prompts, only the final user message was being interpreted by the model. This happened because:

1. `handle_prompt_command` added all prompt messages to `self.messages` (local)
2. `process_agent_response` only sent `self.messages.last()` to `agent.reply()`
3. `agent.reply()` added that single message to the session, then retrieved the conversation from the session, missing all earlier prompt messages

The fix syncs all messages except the last one to the session via `session_manager.add_message()` before calling `process_agent_response`. The last message is still added by `agent.reply()` as normal, ensuring the full conversation context is available to the agent.
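As a rough illustration of the fix, here is a self-contained sketch using hypothetical minimal types (`Message`, `SessionManager`, `sync_all_but_last` are stand-ins, not the actual goose types): the earlier prompt messages are synced into the session up front, and only the last is returned for the agent, which adds it to the session itself inside `agent.reply()`.

```rust
// Hypothetical minimal types sketching the fix; the real goose structs
// are richer, but the sync logic is the same shape.
#[derive(Clone, Debug, PartialEq)]
struct Message {
    role: String,
    content: String,
}

#[derive(Default)]
struct SessionManager {
    messages: Vec<Message>,
}

impl SessionManager {
    fn add_message(&mut self, msg: Message) {
        self.messages.push(msg);
    }
}

// Sync every prompt message except the last into the session, then
// return the last one for the agent to process (agent.reply() adds it
// to the session itself). Returns None for an empty prompt.
fn sync_all_but_last(session: &mut SessionManager, prompt: &[Message]) -> Option<Message> {
    let (last, earlier) = prompt.split_last()?;
    for msg in earlier {
        session.add_message(msg.clone());
    }
    Some(last.clone())
}

fn main() {
    let prompt = vec![
        Message { role: "user".into(), content: "set the scene".into() },
        Message { role: "assistant".into(), content: "acknowledged".into() },
        Message { role: "user".into(), content: "final ask".into() },
    ];
    let mut session = SessionManager::default();
    let last = sync_all_but_last(&mut session, &prompt).unwrap();
    // The two earlier messages are already in the session; the last one
    // still reaches the session via agent.reply() as before.
    assert_eq!(session.messages.len(), 2);
    assert_eq!(last.content, "final ask");
    println!("ok");
}
```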
Contributor
Pull request overview
This PR fixes a bug where multi-message MCP prompts executed via /prompt were not fully synced to the session before the agent processed them. The root cause was that only the final message was being added to the session by agent.reply(), while earlier messages remained only in the local conversation state.
Changes:
- Added message synchronization logic to ensure all prompt messages except the last are added to the session before agent processing
The-Best-Codes approved these changes on Jan 19, 2026
fbalicchia pushed a commit to fbalicchia/goose that referenced this pull request on Jan 23, 2026

…lock#6566) Signed-off-by: fbalicchia <fbalicchia@cuebiq.com>
Summary
Problem

When using `/prompt` in the CLI with multi-message MCP prompts (e.g., User → Assistant → User), only the final user message was being interpreted by the model.

Root Cause

1. `handle_prompt_command` correctly added all prompt messages to `self.messages` (the local conversation)
2. `process_agent_response` only sent `self.messages.last()` to `agent.reply()`
3. `agent.reply()` added that single message to the session, then retrieved the conversation from the session

Fix

Before calling `process_agent_response`, we now sync all messages except the last one to the session via `session_manager.add_message()`. The last message is still added by `agent.reply()` as normal, ensuring the full conversation context is available to the agent.

Type of Change
AI Assistance
Testing
Ran the test suite. I thought about adding a new test, but testing `handle_prompt_command` directly is complex because it requires a full `CliSession` with an `Agent`. I did temporarily implement one with mocking just to make sure the fix worked, but it added little value as a regression test. Open to ideas if anyone thinks we need one.

Related Issues
Fixes #6506
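For the regression-test idea mentioned under Testing, here is a hedged, self-contained sketch with hypothetical fakes (`Message`, `Session`, `FakeAgent` are illustrative stand-ins, not the goose test harness): the fake agent mirrors `agent.reply()` by pushing the incoming message and then reading the whole conversation back from the session, and the assertion checks that all prompt messages, not just the last, reach it once the earlier ones are synced first.

```rust
// Hypothetical regression-test sketch with minimal fakes.
#[derive(Clone, Debug, PartialEq)]
struct Message {
    content: String,
}

#[derive(Default)]
struct Session {
    messages: Vec<Message>,
}

struct FakeAgent;

impl FakeAgent {
    // Mirrors agent.reply(): add the incoming message to the session,
    // then retrieve the full conversation from the session.
    fn reply(&self, session: &mut Session, msg: Message) -> Vec<Message> {
        session.messages.push(msg);
        session.messages.clone()
    }
}

fn main() {
    let prompt: Vec<Message> = ["first user msg", "assistant msg", "final user msg"]
        .iter()
        .map(|c| Message { content: (*c).into() })
        .collect();

    let mut session = Session::default();
    // The behavior under test: sync everything except the last message
    // to the session before calling reply().
    for msg in &prompt[..prompt.len() - 1] {
        session.messages.push(msg.clone());
    }
    let seen = FakeAgent.reply(&mut session, prompt.last().unwrap().clone());

    // Without the sync, the agent would see only one message.
    assert_eq!(seen.len(), 3);
    assert_eq!(seen.last().unwrap().content, "final user msg");
    println!("ok");
}
```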